
Streaming in Getting Started doc for Next.js + Vercel AI #98

Merged
merged 4 commits into from
Jun 22, 2024

Conversation

FranciscoMoretti
Contributor

Changes the getting started doc for Next.js + Vercel AI to use text streaming instead of batch responses.

For reviewers:

  • Is there an example that I can apply this to as well? None of them use Vercel AI.
  • Kept the onFinish empty function just like in the Vercel AI sample. I don't have a strong opinion about keeping/removing that piece.
  • Is there a way to simplify the adapter?

@salmenus
Member

salmenus commented Jun 22, 2024

Thanks for the PR @FranciscoMoretti

Is there an example that I can apply this to as well? None of them use Vercel AI.

No. Right now, live examples on the website do not include Next.js (they are only React). You can add an example for Next.js if you want. You may need to update the example runner.

Kept the onFinish empty function just like in Vercel IA sample. I don't have a strong opinion about keeping/removing that piece.

Fine for me.

Is there a way to simplify the adapter?

We can probably offer utils to simplify streaming.

For standardized APIs (such as LangChain LangServe and HuggingFace Inference), developers don't need to write code; they just provide config. We could apply a similar approach for Next.js, but I don't think it should be the only way to integrate with NLUX, as users may want more control and flexibility.

✅ PR Approved. Merging.

@salmenus salmenus merged commit e3c429c into nlkitai:latest Jun 22, 2024
1 check passed